
    Legal Knowledge Extraction for Knowledge Graph Based Question-Answering

    This paper presents Open Knowledge Extraction (OKE) tools combined with natural language analysis of sentences in order to enrich the semantics of the knowledge extracted from legal texts. The use case is international private law, with specific regard to the Rome I Regulation EC 593/2008, the Rome II Regulation EC 864/2007, and the Brussels I bis Regulation EU 1215/2012. A Knowledge Graph (KG) is built using OKE and Natural Language Processing (NLP) methods jointly with the main ontology design patterns defined for the legal domain (e.g., event, time, role, agent, right, obligation, jurisdiction). Using critical questions identified by legal experts in the domain, we have built a question-answering tool capable of supporting information retrieval and answering these queries. The system should help legal experts retrieve the relevant legal information connected with topics, concepts, entities and normative references in order to support their search activities.
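
    A minimal sketch of the kind of knowledge-graph pipeline output described above, assuming the rdflib Python library: a few triples that an extraction step might produce are loaded into a graph, and one "critical question" is answered with SPARQL. The namespace, classes and properties are illustrative stand-ins, not the paper's ontology.

```python
# Minimal sketch (not the paper's pipeline): build a tiny legal knowledge graph
# with rdflib and answer a question via SPARQL. All names below are hypothetical
# stand-ins for the legal ontology design patterns mentioned in the abstract.
from rdflib import Graph, Namespace, Literal, RDF

LEX = Namespace("http://example.org/legal#")  # hypothetical namespace
g = Graph()

# Triples a knowledge-extraction step might produce from a provision such as Rome I, Art. 4
g.add((LEX.RomeI_Art4, RDF.type, LEX.Norm))
g.add((LEX.RomeI_Art4, LEX.regulates, LEX.ApplicableLaw))
g.add((LEX.RomeI_Art4, LEX.appliesToRole, LEX.Seller))
g.add((LEX.RomeI_Art4, LEX.hasJurisdictionScope, Literal("EU Member States")))

# A "critical question" rendered as SPARQL: which norms regulate the applicable
# law, and to which role do they apply?
query = """
PREFIX lex: <http://example.org/legal#>
SELECT ?norm ?role WHERE {
    ?norm a lex:Norm ;
          lex:regulates lex:ApplicableLaw ;
          lex:appliesToRole ?role .
}
"""
for norm, role in g.query(query):
    print(norm, role)
```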

    Making Things Explainable vs Explaining: Requirements and Challenges Under the GDPR

    The European Union (EU), through the High-Level Expert Group on Artificial Intelligence (AI-HLEG) and the General Data Protection Regulation (GDPR), has recently posed an interesting challenge to the eXplainable AI (XAI) community by demanding a more user-centred approach to explaining Automated Decision-Making systems (ADMs). Looking at the relevant literature, XAI is currently focused on producing explainable software and explanations that generally follow an approach we could term One-Size-Fits-All, which is unable to meet the requirement of centring on user needs. One of the causes of this limit is the belief that making things explainable alone is enough to have pragmatic explanations. Thus, insisting on a clear separation between explainability (something that can be explained) and explanations, we point to explanatorY AI (YAI) as an alternative and more powerful approach to win the AI-HLEG challenge. YAI builds on XAI with the goal of collecting and organizing explainable information, articulating it into what we call user-centred explanatory discourses. Through the use of explanatory discourses/narratives, we recast the problem of generating explanations for ADMs as the identification of an appropriate path over an explanatory space, allowing explainees to interactively explore it and produce the explanation best suited to their needs.
    Sovrano, Francesco; Vitali, Fabio; Palmirani, Monica
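
    The idea of an explanation as a path over an explanatory space can be illustrated with a toy graph search. The sketch below is only a conceptual illustration, not the authors' system; the node names and graph content are invented.

```python
# Toy sketch of an "explanatory space": nodes are units of explainable
# information, edges link units that can follow one another in a discourse.
# A breadth-first search picks the shortest narrative towards what the
# explainee asks about. Content is invented for illustration.
from collections import deque

explanatory_space = {
    "decision": ["main_factors", "legal_basis"],
    "main_factors": ["factor_income", "factor_history"],
    "legal_basis": ["gdpr_art22"],
    "factor_income": [], "factor_history": [], "gdpr_art22": [],
}

def explanation_path(start, goal):
    """Return the shortest sequence of information units from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in explanatory_space[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# An explainee asking about the income factor gets a tailored narrative:
print(explanation_path("decision", "factor_income"))
# ['decision', 'main_factors', 'factor_income']
```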

    Learning a goal-oriented model for energy efficient adaptive applications in data centers

    This work is motivated by the growing energy demand of the IT sector. We propose a goal-oriented approach where the state of the system is assessed using a set of indicators. These indicators are evaluated against thresholds that are used as the goals of our system. We propose a self-adaptive, context-aware framework in which we learn both the relations existing between the indicators and the effect of the available actions on the indicators' state. The system is also able to respond to changes in the environment, keeping these relations updated to the current situation. Results have shown that the proposed methodology is able to create a network of relations between indicators and to propose an effective set of repair actions to counteract suboptimal states of the data center. The proposed framework is an important tool for assisting the system administrator in the management of a data center oriented towards Energy Efficiency (EE), showing the connections between the sometimes conflicting goals of the system and suggesting the repair action(s) most likely to improve the system state, both in terms of EE and Quality of Service (QoS).
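
    The goal-oriented loop described above can be sketched as follows; this is an illustration under assumed names and numbers, not the paper's framework: indicator values are compared against threshold goals, and candidate repair actions are ranked by their learned effect on the violated indicators.

```python
# Illustrative sketch: indicators vs. threshold goals, and repair actions
# ranked by their (here hand-written, normally learned) effect on the
# indicators that violate their goals. All values and names are made up.
indicators = {"pue": 1.9, "cpu_utilization": 0.35, "response_time_ms": 120.0}

# Goals: ("le", t) means the indicator should stay at or below t, ("ge", t) at or above.
goals = {"pue": ("le", 1.6), "cpu_utilization": ("ge", 0.6), "response_time_ms": ("le", 200.0)}

# Learned effect of each repair action on each indicator (signed deltas).
learned_effects = {
    "consolidate_vms":        {"pue": -0.20, "cpu_utilization": +0.25, "response_time_ms": +15.0},
    "raise_cooling_setpoint": {"pue": -0.10, "cpu_utilization":  0.00, "response_time_ms":   0.0},
}

def violated(name):
    op, threshold = goals[name]
    value = indicators[name]
    return value > threshold if op == "le" else value < threshold

def score(action):
    """Reward deltas that push violated indicators back towards their goals."""
    total = 0.0
    for name, delta in learned_effects[action].items():
        if violated(name):
            op, _ = goals[name]
            total += -delta if op == "le" else delta
    return total

print("violated goals:", [n for n in indicators if violated(n)])
print("suggested repair actions:", sorted(learned_effects, key=score, reverse=True))
```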

    Context-aware Data Quality Assessment for Big Data

    Big data has changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such an amount of data can create real value only if combined with quality: good decisions and actions are the result of correct, reliable and complete data. In such a scenario, methods and techniques for data quality assessment can support the identification of suitable data to process. While numerous assessment methods have been proposed for traditional databases, in the big data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume and velocity. In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and the context in which the data are to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest focusing the data quality assessment on only a portion of the dataset and taking into account the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a data quality adapter module which selects the best configuration for the data quality assessment based on the user's main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed by considering real data gathered from a smart city case study.
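
    The sampled-assessment idea can be sketched as below. This is an illustration rather than the proposed adapter module: the quality dimension is completeness, and the confidence factor is a placeholder monotone function of the sampled fraction.

```python
# Sketch: assess completeness on a sample of the records and report a
# confidence factor that shrinks as the sampled fraction gets smaller.
# The confidence formula is an assumption made for illustration only.
import random

def completeness(records, fields):
    """Fraction of non-missing values across the given fields."""
    total = filled = 0
    for rec in records:
        for f in fields:
            total += 1
            filled += rec.get(f) not in (None, "")
    return filled / total if total else 1.0

def sampled_assessment(records, fields, sample_fraction):
    sample = random.sample(records, max(1, int(len(records) * sample_fraction)))
    return {
        "completeness": round(completeness(sample, fields), 3),
        "confidence": round(sample_fraction ** 0.5, 3),  # toy confidence factor
    }

# Synthetic smart-city-like records with some missing sensor readings.
data = [{"sensor_id": i, "pm10": None if i % 7 == 0 else 21.5} for i in range(10_000)]
print(sampled_assessment(data, ["sensor_id", "pm10"], sample_fraction=0.1))
```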

    Metrics, Explainability and the European AI Act Proposal

    On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act considers not only machine learning, but also expert systems and statistical models long in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms to ensure quality at launch and throughout the whole life cycle of AI-based systems, thus ensuring legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets some new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” and listed in Annex III. These requirements call for technical explanations capable of covering the right amount of information in a meaningful way. This paper aims to investigate how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we propose an analysis of the AI Act, aiming to understand (1) what specific explicability obligations are set and who shall comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way. More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Therefore, we discuss the extent to which these requirements are met by the metrics currently under discussion.
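
    As a purely hypothetical illustration of what a goal-aware, model-agnostic metric over explanatory documentation could look like (this is not a metric proposed in the paper nor mandated by the AI Act), one could score documentation by how many archetypal user questions it addresses:

```python
# Toy "goal coverage" score: the fraction of archetypal question categories
# that a piece of explanatory documentation touches at least once.
# Categories and keywords are invented for illustration; real metrics would
# need far more robust matching and legally grounded question sets.
ARCHETYPAL_QUESTIONS = {
    "purpose":   ["intended purpose", "use case"],
    "logic":     ["how the system", "main factors", "logic involved"],
    "risk":      ["risk", "limitation", "error rate"],
    "oversight": ["human oversight", "contest", "appeal"],
}

def goal_coverage(documentation: str) -> float:
    text = documentation.lower()
    hits = sum(any(kw in text for kw in kws) for kws in ARCHETYPAL_QUESTIONS.values())
    return hits / len(ARCHETYPAL_QUESTIONS)

doc = ("The intended purpose is credit scoring. The main factors are income and "
       "repayment history. Known limitations: the error rate rises for thin credit "
       "files. Decisions can be contested and are subject to human oversight.")
print(f"goal coverage: {goal_coverage(doc):.2f}")  # 1.00 for this toy text
```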

    A Framework to Support Digital Humanities and Cultural Heritage Studies Research

    Developments in information and communication technologies, and their repercussions for how cultural heritage is preserved, used and produced, are the subject of several research and innovation efforts in Europe. Advanced digital technologies create new opportunities for cultural heritage to drive innovation. Digital humanities are an important domain for cultural heritage research in Europe and beyond, and digital tools and methods can be used in innovative ways in cultural heritage research. The research and innovation efforts and framework of digital humanities, with cultural heritage as one of its research fields, are influenced by EU policies and legislation. This article describes the existing policy initiatives, practices and related legal setting as framework conditions for digital humanities and cultural heritage research and innovation in Europe, focusing on urban history applications in the age of digital libraries. It is a multifaceted study of the state of the art in policies, legislation and standards, drawing on a survey with 1000 participants and literature surveys on copyright and policies.

    Plasma concentration of presepsin and its relationship to the diagnosis of infections in multiple trauma patients admitted to intensive care

    Background and aims: Septic complications represent the predominant cause of late death in poly-trauma patients. The need to differentiate septic from non-septic patients is most relevant at the early stage of the illness, in order to improve the clinical outcome and to reduce mortality. The identification of a sensitive, specific and clinically reliable biomarker capable of early recognition of incoming septic complications in trauma patients, and whose expression is not influenced by concomitant traumatic injuries, is still a challenge for researchers in the field.

    Materials and methods: A retrospective analysis of 48 adult patients (9 females and 39 males, mean age 47.6±19 years) with multiple trauma was performed. The inclusion criterion was acute trauma sustained no more than 24 hours before admission; the exclusion criteria were: antibiotic treatment started on admission and maintained for more than 48 hours; ongoing infection on admission not associated with trauma; treatment with immunosuppressors/immunomodulators; age <18 years. Presepsin was measured using an automated chemiluminescence analyser at 1, 3, 5 and 8 days after hospitalization. The diagnosis of systemic inflammatory response syndrome (SIRS)/infection was established according to the criteria of the Surviving Sepsis Campaign.

    Results and conclusions: In patients with SIRS, the mean presepsin concentration was 917.08 (±69.042) ng/L vs 980.258 (±1951.32) ng/L in patients without SIRS (P=0.769). In infected patients, the mean presepsin concentration was 1513.25 (±2296.54) ng/L vs 654.21 (±511.068) ng/L in patients not infected on admission (P<0.05). The plasma presepsin concentration increased progressively during the first 8 days of hospitalization. Presepsin concentration in infected patients was significantly higher than in non-infected patients. On the other hand, no significant differences were found in the plasma levels of presepsin between patients with and without SIRS. No other clinical condition related to the trauma affected presepsin. Our data suggest that presepsin may be considered a helpful diagnostic tool for the early diagnosis of sepsis in trauma patients.